Google Developers Blog: MediaPipe on the Web
The C++ is compiled with Emscripten to WebAssembly.
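A minimal sketch of what the Emscripten side can look like — the function name and build command are my own placeholders, not from the blog:

```cpp
#include <emscripten/emscripten.h>

// EMSCRIPTEN_KEEPALIVE keeps the symbol in the wasm exports, so the
// JavaScript side can call it as Module._add(2, 3) after the glue
// code has loaded.
extern "C" EMSCRIPTEN_KEEPALIVE int add(int a, int b) {
  return a + b;
}

// Hypothetical build command:
//   emcc add.cc -O3 -o add.js   (emits add.js glue code plus add.wasm)
```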
Graphics and rendering are driven from MediaPipe with plain WebGL.
Does GL also run in a worker? Check whether anything special is being done that forces processing onto the main thread.
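One way to probe this from the Emscripten side (a sketch, assuming the default "#canvas" target; the linker settings named below are real Emscripten flags, but using them here is my assumption, not something the blog states): with -sOFFSCREENCANVAS_SUPPORT=1 plus -pthread, Emscripten can transfer the canvas to a pthread (backed by a Web Worker) as an OffscreenCanvas, so the GL context does not have to live on the main thread.

```cpp
#include <emscripten/html5.h>
#include <GLES2/gl2.h>

int main() {
  // Request a WebGL2 context on the page's default canvas. When this
  // runs inside a pthread and the build uses -sOFFSCREENCANVAS_SUPPORT=1,
  // the context is created on an OffscreenCanvas in the worker; without
  // that, GL calls would have to be proxied to the main thread.
  EmscriptenWebGLContextAttributes attrs;
  emscripten_webgl_init_context_attributes(&attrs);
  attrs.majorVersion = 2;

  EMSCRIPTEN_WEBGL_CONTEXT_HANDLE ctx =
      emscripten_webgl_create_context("#canvas", &attrs);
  emscripten_webgl_make_context_current(ctx);

  glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
  glClear(GL_COLOR_BUFFER_BIT);
  return 0;
}
```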
While executing WebAssembly is generally much faster than pure JavaScript, it is also usually much slower than native C++, so we made several optimizations in order to provide a better user experience.
We utilize the GPU for image operations when possible, and opt for using the lightest-weight possible versions of all our ML models (giving up some quality for speed).
Image processing is done on the GPU.
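For a concrete picture of the kind of GPU image operation this implies — my own minimal example, not code from MediaPipe — a GLSL ES fragment shader, runnable through WebGL, that converts each texel to grayscale:

```cpp
#include <GLES2/gl2.h>

// GLSL ES 1.00 fragment shader: sample the input texture and output
// BT.601 luma, i.e. a grayscale version of the image.
static const char* kGrayscaleFs = R"(
  precision mediump float;
  varying vec2 v_texcoord;
  uniform sampler2D u_input;
  void main() {
    vec4 c = texture2D(u_input, v_texcoord);
    float y = dot(c.rgb, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(vec3(y), c.a);
  }
)";

GLuint CompileGrayscaleShader() {
  GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
  glShaderSource(fs, 1, &kGrayscaleFs, nullptr);
  glCompileShader(fs);
  return fs;  // link into a program with a pass-through vertex shader
}
```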
However, since compute shaders are not widely available for web, we cannot easily make use of TensorFlow Lite GPU machine learning inference, and the resulting CPU inference often ends up being a significant performance bottleneck. So to help alleviate this, we automatically augment our “TfLiteInferenceCalculator” by having it use the XNNPack ML Inference Library, which gives us a 2-3x speedup in most of our applications.
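The blog doesn't show the TfLiteInferenceCalculator change itself; below is a sketch of how XNNPack is enabled through TensorFlow Lite's standard delegate API, which is presumably what the calculator does internally (model path and thread count are placeholders):

```cpp
#include <memory>

#include "tensorflow/lite/delegates/xnnpack/xnnpack_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

std::unique_ptr<tflite::Interpreter> BuildXnnpackInterpreter() {
  // Placeholder model path.
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);

  // Route supported ops through XNNPack's optimized CPU kernels.
  TfLiteXNNPackDelegateOptions options = TfLiteXNNPackDelegateOptionsDefault();
  options.num_threads = 1;  // single-threaded is typical on the web
  TfLiteDelegate* delegate = TfLiteXNNPackDelegateCreate(&options);
  interpreter->ModifyGraphWithDelegate(delegate);
  // Note: the delegate must outlive the interpreter; release it with
  // TfLiteXNNPackDelegateDelete(delegate) after the interpreter is gone.
  return interpreter;
}
```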